Optimal Control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory. Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory.


General method

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).

We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to ''minimize'' the total traveling time? In this example, the term ''control law'' refers specifically to the way in which the driver presses the accelerator and shifts the gears. The ''system'' consists of both the car and the road, and the ''optimality criterion'' is the minimization of the total traveling time. Control problems usually include ancillary constraints: for example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits must be respected, and so on. A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. Constraints are often interchangeable with the cost function. Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows. Minimize the continuous-time cost functional

J[\mathbf{x}(\cdot), \mathbf{u}(\cdot), t_0, t_f] := E[\mathbf{x}(t_0), t_0, \mathbf{x}(t_f), t_f] + \int_{t_0}^{t_f} F[\mathbf{x}(t), \mathbf{u}(t), t] \,\mathrm{d}t

subject to the first-order dynamic constraints (the state equation)

\dot{\mathbf{x}}(t) = \mathbf{a}[\mathbf{x}(t), \mathbf{u}(t), t],

the algebraic ''path constraints''

\mathbf{b}[\mathbf{x}(t), \mathbf{u}(t), t] \leq \mathbf{0},

and the endpoint conditions

\boldsymbol{\phi}[\mathbf{x}(t_0), t_0, \mathbf{x}(t_f), t_f] = 0,

where \mathbf{x}(t) is the ''state'', \mathbf{u}(t) is the ''control'', t is the independent variable (generally speaking, time), t_0 is the initial time, and t_f is the terminal time. The terms E and F are called the ''endpoint cost'' and the ''running cost'', respectively. In the calculus of variations, E and F are referred to as the Mayer term and the ''Lagrangian'', respectively. Furthermore, it is noted that the path constraints are in general ''inequality'' constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution [\mathbf{x}^*(t), \mathbf{u}^*(t), t_0^*, t_f^*] to the optimal control problem is ''locally minimizing''.
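
For concreteness, Pontryagin's principle for the problem above can be sketched as follows (suppressing, for brevity, the path constraints \mathbf{b} \leq \mathbf{0}). Define the Hamiltonian H(\mathbf{x}, \mathbf{u}, \boldsymbol\lambda, t) = F(\mathbf{x}, \mathbf{u}, t) + \boldsymbol\lambda^{\mathsf{T}} \mathbf{a}(\mathbf{x}, \mathbf{u}, t). A locally minimizing solution must then satisfy the state and costate equations

\dot{\mathbf{x}}(t) = \frac{\partial H}{\partial \boldsymbol\lambda}, \qquad \dot{\boldsymbol\lambda}(t) = -\frac{\partial H}{\partial \mathbf{x}},

the pointwise minimization condition

\mathbf{u}^*(t) = \arg\min_{\mathbf{u}} H(\mathbf{x}^*(t), \mathbf{u}, \boldsymbol\lambda(t), t),

and appropriate transversality conditions at the endpoints (for example, \boldsymbol\lambda(t_f) = \partial E / \partial \mathbf{x}(t_f) when the terminal state is free).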


Linear quadratic control

A special case of the general nonlinear optimal control problem given in the previous section is the ''linear quadratic'' (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the ''quadratic'' continuous-time cost functional

J = \tfrac{1}{2} \mathbf{x}^{\mathsf{T}}(t_f) \mathbf{S}_f \mathbf{x}(t_f) + \tfrac{1}{2} \int_{t_0}^{t_f} [\mathbf{x}^{\mathsf{T}}(t) \mathbf{Q}(t) \mathbf{x}(t) + \mathbf{u}^{\mathsf{T}}(t) \mathbf{R}(t) \mathbf{u}(t)] \,\mathrm{d}t

subject to the ''linear'' first-order dynamic constraints

\dot{\mathbf{x}}(t) = \mathbf{A}(t) \mathbf{x}(t) + \mathbf{B}(t) \mathbf{u}(t),

and the initial condition

\mathbf{x}(t_0) = \mathbf{x}_0.

A particular form of the LQ problem that arises in many control system problems is that of the ''linear quadratic regulator'' (LQR), where all of the matrices (i.e., \mathbf{A}, \mathbf{B}, \mathbf{Q}, and \mathbf{R}) are ''constant'', the initial time is arbitrarily set to zero, and the terminal time is taken in the limit t_f \rightarrow \infty (this last assumption is what is known as ''infinite horizon''). The LQR problem is stated as follows. Minimize the infinite-horizon quadratic continuous-time cost functional

J = \tfrac{1}{2} \int_{0}^{\infty} [\mathbf{x}^{\mathsf{T}}(t) \mathbf{Q} \mathbf{x}(t) + \mathbf{u}^{\mathsf{T}}(t) \mathbf{R} \mathbf{u}(t)] \,\mathrm{d}t

subject to the ''linear time-invariant'' first-order dynamic constraints

\dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t),

and the initial condition

\mathbf{x}(t_0) = \mathbf{x}_0.

In the finite-horizon case the matrices are restricted in that \mathbf{Q} and \mathbf{R} are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices \mathbf{Q} and \mathbf{R} are not only positive semi-definite and positive definite, respectively, but are also ''constant''. These additional restrictions on \mathbf{Q} and \mathbf{R} in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost functional is ''bounded'', the additional restriction is imposed that the pair (\mathbf{A}, \mathbf{B}) is ''controllable''. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the ''control energy'' (measured as a quadratic form).

The infinite-horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to the zero state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved ''after'' the zero-output one is. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner.

It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form

\mathbf{u}(t) = -\mathbf{K}(t) \mathbf{x}(t),

where \mathbf{K}(t) is a properly dimensioned matrix, given as

\mathbf{K}(t) = \mathbf{R}^{-1} \mathbf{B}^{\mathsf{T}} \mathbf{S}(t),

and \mathbf{S}(t) is the solution of the differential Riccati equation. The differential Riccati equation is given as

\dot{\mathbf{S}}(t) = -\mathbf{S}(t) \mathbf{A} - \mathbf{A}^{\mathsf{T}} \mathbf{S}(t) + \mathbf{S}(t) \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^{\mathsf{T}} \mathbf{S}(t) - \mathbf{Q}.

For the finite-horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

\mathbf{S}(t_f) = \mathbf{S}_f.

For the infinite-horizon LQR problem, the differential Riccati equation is replaced with the ''algebraic'' Riccati equation (ARE), given as

\mathbf{0} = -\mathbf{S} \mathbf{A} - \mathbf{A}^{\mathsf{T}} \mathbf{S} + \mathbf{S} \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^{\mathsf{T}} \mathbf{S} - \mathbf{Q}.

Since the ARE arises from the infinite-horizon problem, the matrices \mathbf{A}, \mathbf{B}, \mathbf{Q}, and \mathbf{R} are all ''constant''. It is noted that there are in general multiple solutions to the algebraic Riccati equation, and the ''positive definite'' (or positive semi-definite) solution is the one that is used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf E. Kálmán.
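
As a brief numerical illustration, the LQR feedback gain can be computed from the ARE with standard scientific software. The sketch below uses SciPy's solve_continuous_are on a double-integrator system; the matrices are assumptions chosen for the example, not data from this article.

    # A minimal sketch: compute the infinite-horizon LQR gain from the ARE
    # using SciPy. The double-integrator matrices are illustrative assumptions.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])        # double-integrator dynamics
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)                     # state weight, positive semi-definite
    R = np.array([[1.0]])             # control weight, positive definite

    S = solve_continuous_are(A, B, Q, R)     # positive-definite ARE solution
    K = np.linalg.solve(R, B.T @ S)          # feedback gain K = R^{-1} B^T S
    print(K)                                 # optimal control is u = -K x

Note that solve_continuous_are uses the convention \mathbf{A}^{\mathsf{T}}\mathbf{S} + \mathbf{S}\mathbf{A} - \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^{\mathsf{T}}\mathbf{S} + \mathbf{Q} = \mathbf{0}, which is the ARE above multiplied by -1.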


Numerical methods for optimal control

Optimal control problems are generally nonlinear and therefore generally do not have analytic solutions (unlike, e.g., the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (c. 1950s to 1980s) the favored approach for solving optimal control problems was that of ''indirect methods''. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is a Hamiltonian system of the form

\dot{\mathbf{x}} = \frac{\partial H}{\partial \boldsymbol\lambda}, \qquad \dot{\boldsymbol\lambda} = -\frac{\partial H}{\partial \mathbf{x}},

where

H = F + \boldsymbol\lambda^{\mathsf{T}} \mathbf{a} - \boldsymbol\mu^{\mathsf{T}} \mathbf{b}

is the ''augmented Hamiltonian''; in an indirect method, the boundary-value problem is solved (using the appropriate boundary or ''transversality'' conditions). The beauty of using an indirect method is that the state and adjoint (i.e., \boldsymbol\lambda) are solved for, and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO.

The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called ''direct methods''. In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise-constant parameterization). Simultaneously, the cost functional is approximated as a ''cost function''. Then, the coefficients of the function approximations are treated as optimization variables and the problem is "transcribed" to a nonlinear optimization problem (NLP) of the form: Minimize

F(\mathbf{z})

subject to the algebraic constraints

\mathbf{g}(\mathbf{z}) = \mathbf{0}, \qquad \mathbf{h}(\mathbf{z}) \leq \mathbf{0}.

Depending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., pseudospectral optimal control), or quite large (e.g., a direct collocation method). In the latter case (i.e., a collocation method), the nonlinear optimization problem may have literally thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the case that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is ''sparse'', and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct ''collocation methods'', which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular that many people have written elaborate software programs that employ them; such programs include DIRCOL, SOCS, OTIS, GESOP/ASTOS, DITAN, and PyGMO/PyKEP. In recent years, due to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common. Examples of academically developed MATLAB software tools implementing direct methods include RIOTS, DIDO, DIRECT, FALCON.m, and GPOPS, while an example of an industry-developed MATLAB tool is PROPT. These software tools have significantly increased the opportunity for people to explore complex optimal control problems both for academic research and for industrial problems. Finally, it is noted that general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN.
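
To make the idea of direct transcription concrete, the sketch below transcribes a toy minimum-energy problem (a double integrator driven from x = (1, 0) to the origin, chosen here purely for illustration) into a small NLP using piecewise-constant controls and forward-Euler integration, then hands it to SciPy's general-purpose NLP solver. The problem data and discretization choices are assumptions, not anything prescribed by the methods discussed above.

    # A minimal direct single-shooting sketch, assuming a toy problem: drive a
    # double integrator from x = (1, 0) to the origin in time T with minimum
    # control energy. Dynamics, horizon, and discretization are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    N, T = 20, 1.0                    # control segments and horizon
    dt = T / N
    x0 = np.array([1.0, 0.0])

    def simulate(u):
        # Propagate x' = (x2, u) with piecewise-constant u via forward Euler;
        # the returned final state serves as the equality constraint g(z) = 0.
        x = x0.copy()
        for k in range(N):
            x = x + dt * np.array([x[1], u[k]])
        return x

    def cost(u):
        # Transcribed running cost: the integral of u^2 dt
        return dt * np.sum(u**2)

    # SciPy selects an SQP-type solver (SLSQP) when constraints are supplied.
    res = minimize(cost, np.zeros(N), constraints={"type": "eq", "fun": simulate})
    print(res.x)                      # approximate optimal control sequence

Here the decision vector \mathbf{z} consists of the N control values; a collocation method would additionally treat the state values at the grid points as decision variables, producing the much larger but sparser NLP described above.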


Discrete-time optimal control

The examples thus far have shown continuous-time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is primarily concerned with discrete-time systems and solutions. The Theory of Consistent Approximations provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones. For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct method RIOTS is based on the Theory of Consistent Approximations.
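
As a small illustration of discrete-time optimal control, the finite-horizon discrete-time LQR problem can be solved exactly by backward dynamic programming via the discrete Riccati recursion. The matrices below (a discretized double integrator) are illustrative assumptions.

    # A minimal sketch of finite-horizon discrete-time LQR via backward
    # dynamic programming (the discrete Riccati recursion).
    import numpy as np

    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])   # double integrator discretized with step 0.1
    B = np.array([[0.005],
                  [0.1]])
    Q = np.eye(2)                # state weight
    R = np.array([[1.0]])        # control weight
    N = 50                       # horizon length

    P = Q.copy()                 # terminal cost weight P_N
    gains = []
    for _ in range(N):
        # K_k = (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati recursion: P_k = Q + A^T P (A - B K_k)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()              # gains[k] is the feedback gain at stage k
    print(gains[0])              # optimal control at stage k is u_k = -gains[k] x_k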


Examples

A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) \lambda(t). The costate summarizes in one number the marginal value of expanding or contracting the state variable in the next period. The marginal value comprises not only the gains accruing in the next period but also those associated with the remaining duration of the program. It is nice when \lambda(t) can be solved analytically, but usually the most one can do is describe it sufficiently well that the intuition can grasp the character of the solution and an equation solver can solve numerically for the values. Having obtained \lambda(t), the optimal value for the control at time t can usually be solved as a differential equation conditional on knowledge of \lambda(t). Again, it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control, and to use a numerical solver to isolate the actual choice values in time.


Finite time

Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date 0 to date T. At date 0 there is x_0 ore in the ground, and the time-dependent amount of ore x(t) left in the ground declines at the rate u(t) at which the mine owner extracts it. The mine owner extracts ore at cost u(t)^2/x(t) (the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price p. Any ore left in the ground at time T cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction u(t), varying with time, to maximize profits over the period of ownership with no time discounting.
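
Under the assumptions above (price p, extraction cost u^2/x, no discounting, no scrap value), the owner's problem can be written formally as

\max_{u(\cdot)} \int_0^T \left[ p\,u(t) - \frac{u(t)^2}{x(t)} \right] \mathrm{d}t \quad \text{subject to} \quad \dot{x}(t) = -u(t), \quad x(0) = x_0.

As a sketch of the maximum-principle calculation, the Hamiltonian is H = p u - u^2/x - \lambda u, so the first-order condition \partial H / \partial u = p - 2u/x - \lambda = 0 gives the extraction rule u(t) = \tfrac{1}{2} (p - \lambda(t))\, x(t), while the costate evolves according to \dot{\lambda}(t) = -\partial H / \partial x = -u(t)^2 / x(t)^2 with the transversality condition \lambda(T) = 0 (since ore left in the ground has no value).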


See also

* Active inference
* Bellman equation
* Bellman pseudospectral method
* Brachistochrone
* DIDO
* DNSS point
* Dynamic programming
* Gauss pseudospectral method
* Generalized filtering
* GPOPS-II
* CasADi
* JModelica.org (Modelica-based open source platform for dynamic optimization)
* Kalman filter
* Linear-quadratic regulator
* Model predictive control
* Overtaking criterion
* PID controller
* PROPT (Optimal Control Software for MATLAB)
* Pseudospectral optimal control
* Pursuit-evasion games
* Sliding mode control
* SNOPT
* Stochastic control
* Trajectory optimization




External links


* Computational Optimal Control – Dr. Benoît Chachuat: Nonlinear Programming, Calculus of Variations and Optimal Control
* GEKKO – Python package for optimal control
* GESOP – Graphical Environment for Simulation and OPtimization
* GPOPS-II – General-Purpose MATLAB Optimal Control Software
* CasADi – Free and open source symbolic framework for optimal control
* PROPT – MATLAB Optimal Control Software
* OpenOCL – Open Optimal Control Library
* Elmer G. Wiens: Optimal Control – Applications of Optimal Control Theory Using the Pontryagin Maximum Principle, with interactive models
* Pontryagin's Principle Illustrated with Examples
* On Optimal Control by Yu-Chi Ho
* Pseudospectral optimal control: Part 1
* Pseudospectral optimal control: Part 2